65 research outputs found

    The Low Rank Approximations and Ritz Values in LSQR For Linear Discrete Ill-Posed Problems

    LSQR and its mathematically equivalent CGLS have been popularly used over the decades for large-scale linear discrete ill-posed problems, where the iteration number $k$ plays the role of the regularization parameter. It has long been known that if the Ritz values in LSQR converge to the large singular values of $A$ in natural order until its semi-convergence, then LSQR has the same regularization ability as the truncated singular value decomposition (TSVD) method and can compute a 2-norm filtering best possible regularized solution. However, hitherto there has been no definitive rigorous result on the approximation behavior of the Ritz values in the context of ill-posed problems. In this paper, for severely, moderately and mildly ill-posed problems, we give accurate solutions to two closely related, fundamental and highly challenging problems on the regularization of LSQR: (i) how accurate are the low rank approximations generated by Lanczos bidiagonalization? (ii) do the Ritz values involved in LSQR approximate the large singular values of $A$ in natural order? We also show how to judge the accuracy of low rank approximations reliably during computation without extra cost. Numerical experiments confirm our results. Comment: 30 pages, 9 figures. arXiv admin note: text overlap with arXiv:1608.05907, arXiv:1701.0570
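
    The Ritz values referred to above are the singular values of the projected bidiagonal matrix $B_k$ produced by Lanczos (Golub-Kahan) bidiagonalization. The following minimal sketch, on a synthetic severely ill-posed problem of our own construction (geometrically decaying singular values; the function name golub_kahan is ours, not from the paper), shows how they can be compared with the large singular values of $A$:

```python
import numpy as np

def golub_kahan(A, b, k):
    """k steps of Golub-Kahan (Lanczos) bidiagonalization with full
    reorthogonalization; returns the (k+1) x k lower bidiagonal B_k."""
    m, n = A.shape
    U = np.zeros((m, k + 1)); V = np.zeros((n, k))
    alpha = np.zeros(k); beta = np.zeros(k + 1)
    beta[0] = np.linalg.norm(b); U[:, 0] = b / beta[0]
    for i in range(k):
        v = A.T @ U[:, i] - (beta[i] * V[:, i - 1] if i > 0 else 0.0)
        v -= V[:, :i] @ (V[:, :i].T @ v)          # reorthogonalize
        alpha[i] = np.linalg.norm(v); V[:, i] = v / alpha[i]
        u = A @ V[:, i] - alpha[i] * U[:, i]
        u -= U[:, :i + 1] @ (U[:, :i + 1].T @ u)  # reorthogonalize
        beta[i + 1] = np.linalg.norm(u); U[:, i + 1] = u / beta[i + 1]
    B = np.zeros((k + 1, k))
    B[np.arange(k), np.arange(k)] = alpha
    B[np.arange(1, k + 1), np.arange(k)] = beta[1:]
    return B

# Synthetic severely ill-posed problem: sigma_i decays geometrically.
rng = np.random.default_rng(0)
n = 200
Q1, _ = np.linalg.qr(rng.standard_normal((n, n)))
Q2, _ = np.linalg.qr(rng.standard_normal((n, n)))
sigma = 0.9 ** np.arange(n)
A = Q1 @ np.diag(sigma) @ Q2.T
b = A @ np.ones(n) + 1e-6 * rng.standard_normal(n)

k = 10
ritz = np.linalg.svd(golub_kahan(A, b, k), compute_uv=False)
print(np.c_[ritz, sigma[:k]])   # Ritz values vs the k largest sigma_i
```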

    Regularization Properties of the Krylov Iterative Solvers CGME and LSMR For Linear Discrete Ill-Posed Problems with an Application to Truncated Randomized SVDs

    For the large-scale linear discrete ill-posed problem $\min\|Ax-b\|$ or $Ax=b$ with $b$ contaminated by Gaussian white noise, there are four commonly used Krylov solvers: LSQR and its mathematically equivalent CGLS, the Conjugate Gradient (CG) method applied to $A^TAx=A^Tb$; CGME, the CG method applied to $\min\|AA^Ty-b\|$ or $AA^Ty=b$ with $x=A^Ty$; and LSMR, the minimal residual (MINRES) method applied to $A^TAx=A^Tb$. These methods have intrinsic regularizing effects, where the number $k$ of iterations plays the role of the regularization parameter. In this paper, we establish a number of regularization properties of CGME and LSMR, including the filtered SVD expansion of CGME iterates, and prove that the 2-norm filtering best regularized solutions by CGME and LSMR are less accurate than and at least as accurate as those by LSQR, respectively. We also prove that the semi-convergence of CGME always occurs no later than that of LSQR, and that of LSMR no sooner. As a byproduct, using the analysis approach for CGME, we improve a fundamental result on the accuracy of the truncated rank-$k$ approximate SVD of $A$ generated by randomized algorithms, and reveal how the truncation step damages the accuracy. Numerical experiments justify our results on CGME and LSMR. Comment: 30 pages, 7 figures
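
    The semi-convergence behavior described above is easy to observe with SciPy's built-in lsqr and lsmr solvers. The sketch below uses a synthetic problem of our own construction (not from the paper); atol=btol=0 and conlim=0 disable the solvers' own stopping tests, so the iteration cap alone acts as the regularization parameter. Rerunning from scratch for each cap is wasteful but keeps the sketch short:

```python
import numpy as np
from scipy.sparse.linalg import lsqr, lsmr

# Synthetic ill-posed problem: geometric singular value decay, noisy b.
rng = np.random.default_rng(1)
n = 150
Q1, _ = np.linalg.qr(rng.standard_normal((n, n)))
Q2, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q1 @ np.diag(0.85 ** np.arange(n)) @ Q2.T
x_true = Q2 @ (1.0 / (1.0 + np.arange(n)))   # smooth exact solution
b = A @ x_true + 1e-5 * rng.standard_normal(n)

# The error first decreases, then grows once the iterates pick up noise
# (semi-convergence); the best iteration numbers of the two solvers can
# then be compared directly.
for k in range(1, 31):
    xq = lsqr(A, b, atol=0, btol=0, conlim=0, iter_lim=k)[0]
    xm = lsmr(A, b, atol=0, btol=0, conlim=0, maxiter=k)[0]
    print(k, np.linalg.norm(xq - x_true), np.linalg.norm(xm - x_true))
```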

    On Convergence of the Inexact Rayleigh Quotient Iteration with MINRES

    For the Hermitian inexact Rayleigh quotient iteration (RQI), we present a new general theory, independent of the iterative solvers used for the shifted inner linear systems. The theory shows that the method converges at least quadratically under a new condition, called the uniform positiveness condition, which may allow the inner tolerance $\xi_k\geq 1$ at outer iteration $k$ and can be considerably weaker than the condition $\xi_k\leq\xi<1$ with $\xi$ a constant not near one, as commonly used in the literature. We consider the convergence of the inexact RQI with the unpreconditioned and tuned preconditioned MINRES methods for the linear systems. Some attractive properties are derived for the residuals obtained by MINRES. Based on them and the new general theory, we make a more refined analysis and establish a number of new convergence results. Let $\|r_k\|$ be the residual norm of the approximate eigenpair at outer iteration $k$. All the available cubic and quadratic convergence results require $\xi_k=O(\|r_k\|)$ and $\xi_k\leq\xi$ with a fixed $\xi$ not near one, respectively. Fundamentally different from these, we prove that the inexact RQI with MINRES generally converges cubically, quadratically and linearly provided that $\xi_k\leq\xi$ with a constant $\xi<1$ not near one, $\xi_k=1-O(\|r_k\|)$ and $\xi_k=1-O(\|r_k\|^2)$, respectively. Therefore, the new convergence conditions are much more relaxed than ever before. The theory can be used to design practical stopping criteria to implement the method more effectively. Numerical experiments confirm our results. Comment: 27 pages, 4 figures
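
    The flavor of the method is easy to reproduce with SciPy, whose minres accepts a shift argument and solves $(A-\rho I)w=x$ directly. In the minimal sketch below (a random symmetric test matrix of our own, not from the paper), the inner solves are kept inexact simply by capping the number of MINRES iterations rather than by prescribing a tolerance $\xi_k$:

```python
import numpy as np
from scipy.sparse.linalg import minres

rng = np.random.default_rng(2)
n = 300
A = rng.standard_normal((n, n)); A = (A + A.T) / 2  # symmetric test matrix

x = rng.standard_normal(n); x /= np.linalg.norm(x)
for k in range(10):
    rho = x @ (A @ x)                  # Rayleigh quotient
    r = A @ x - rho * x                # eigenresidual
    print(k, rho, np.linalg.norm(r))
    if np.linalg.norm(r) < 1e-10:
        break
    # Inexact inner solve of the shifted system (A - rho*I) w = x:
    # MINRES is stopped early, so the inner accuracy stays only modest.
    w, _ = minres(A, x, shift=rho, maxiter=20)
    x = w / np.linalg.norm(w)
```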

    Some Results on the Regularization of LSQR for Large-Scale Discrete Ill-Posed Problems

    LSQR, a Lanczos bidiagonalization based Krylov subspace iterative method, and its mathematically equivalent CGLS applied to the normal equations, are commonly used for large-scale discrete ill-posed problems. It is well known that LSQR and CGLS have regularizing effects, where the number of iterations plays the role of the regularization parameter. However, it has long been unknown whether these regularizing effects are good enough to find best possible regularized solutions. Here a regularized solution is called best possible if it is at least as accurate as the best regularized solution obtained by the truncated singular value decomposition (TSVD) method. In this paper, we establish bounds for the distance between the $k$-dimensional Krylov subspace and the $k$-dimensional dominant right singular space. They show that the Krylov subspace captures the dominant right singular space better for severely and moderately ill-posed problems than for mildly ill-posed problems. Our general conclusions are that LSQR has better regularizing effects for the first two kinds of problems than for the third kind, and that a hybrid LSQR with additional regularization is generally needed for mildly ill-posed problems. Exploiting the established bounds, we derive an estimate for the accuracy of the rank-$k$ approximation generated by Lanczos bidiagonalization. Numerical experiments illustrate that the regularizing effects of LSQR are good enough to compute best possible regularized solutions for severely and moderately ill-posed problems, stronger than our theory predicts, but not for mildly ill-posed problems, where additional regularization is needed. Comment: 20 pages, 7 figures. arXiv admin note: text overlap with arXiv:1503.0393
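
    The subspace distance being bounded here can be measured directly on small examples. A sketch under our own assumptions (a synthetic $A$ with prescribed singular value decay; SciPy's subspace_angles does the measuring): the Krylov subspace is $\mathcal{K}_k=\mathrm{span}\{A^Tb,(A^TA)A^Tb,\ldots\}$, and the distance is the sine of the largest principal angle to the span of the $k$ dominant right singular vectors:

```python
import numpy as np
from scipy.linalg import subspace_angles

rng = np.random.default_rng(3)
n = 200
Q1, _ = np.linalg.qr(rng.standard_normal((n, n)))
Q2, _ = np.linalg.qr(rng.standard_normal((n, n)))
decay = 0.8 ** np.arange(n)        # severe decay; try (1+j)**-1.1 for a
A = Q1 @ np.diag(decay) @ Q2.T     # mildly ill-posed case and compare
b = A @ np.ones(n) + 1e-6 * rng.standard_normal(n)

for k in (2, 4, 8):
    K = np.empty((n, k)); v = A.T @ b
    for j in range(k):             # Krylov basis of K_k(A^T A, A^T b)
        K[:, j] = v / np.linalg.norm(v)
        v = A.T @ (A @ K[:, j])
    K, _ = np.linalg.qr(K)
    Vk = Q2[:, :k]                 # dominant right singular vectors of A
    print(k, np.sin(subspace_angles(K, Vk)).max())  # 2-norm distance
```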

    Modified Truncated Randomized Singular Value Decomposition (MTRSVD) Algorithms for Large Scale Discrete Ill-posed Problems with General-Form Regularization

    In this paper, we propose new randomization based algorithms for the large-scale linear discrete ill-posed problem with general-form regularization: $\min\|Lx\|$ subject to $\min\|Ax-b\|$, where $L$ is a regularization matrix. Our algorithms are inspired by the modified truncated singular value decomposition (MTSVD) method, which is suitable only for small to medium scale problems, and by randomized SVD (RSVD) algorithms that generate good low rank approximations to $A$. We use rank-$k$ truncated randomized SVD (TRSVD) approximations to $A$, obtained by truncating rank-$(k+q)$ RSVD approximations to $A$, where $q$ is an oversampling parameter. The resulting algorithms are called modified TRSVD (MTRSVD) methods. At every step, we use the LSQR algorithm to solve the resulting inner least squares problem, which is proved to become better conditioned as $k$ increases, so that LSQR converges faster. We present sharp bounds for the approximation accuracy of the RSVDs and TRSVDs for severely, moderately and mildly ill-posed problems, and substantially improve a known basic bound for TRSVD approximations. We show how to choose the stopping tolerance for LSQR in order to guarantee that the computed and exact best regularized solutions have the same accuracy. Numerical experiments illustrate that the best regularized solutions by MTRSVD are as accurate as the ones by the truncated generalized singular value decomposition (TGSVD) algorithm, and at least as accurate as those by some existing truncated randomized generalized singular value decomposition (TRGSVD) algorithms. Comment: 26 pages, 6 figures
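
    The truncation step at the heart of MTRSVD is easy to state in code. Below is a generic oversample-then-truncate RSVD sketch (our own minimal version with one power iteration, not the paper's algorithm; the function name trsvd is ours), whose rank-$k$ output can be compared against the optimal error $\sigma_{k+1}$:

```python
import numpy as np

def trsvd(A, k, q=10, power=1, rng=None):
    """Rank-k truncated randomized SVD: sample the range of A with
    oversampling q, then truncate the rank-(k+q) RSVD to rank k."""
    rng = rng or np.random.default_rng()
    Omega = rng.standard_normal((A.shape[1], k + q))  # Gaussian test matrix
    Y = A @ Omega
    for _ in range(power):             # power iteration sharpens the
        Y = A @ (A.T @ Y)              # sampled range for slow decay
    Q, _ = np.linalg.qr(Y)
    U, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ U)[:, :k], s[:k], Vt[:k]              # truncation step

rng = np.random.default_rng(4)
n = 300
Q1, _ = np.linalg.qr(rng.standard_normal((n, n)))
Q2, _ = np.linalg.qr(rng.standard_normal((n, n)))
sigma = (1.0 + np.arange(n)) ** -2.0   # moderately ill-posed decay
A = Q1 @ np.diag(sigma) @ Q2.T

k = 15
U, s, Vt = trsvd(A, k, rng=rng)
print(np.linalg.norm(A - (U * s) @ Vt, 2), sigma[k])  # vs optimal sigma_{k+1}
```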

    On inner iterations of Jacobi-Davidson type methods for large SVD computations

    We make a convergence analysis of the harmonic and refined harmonic extraction versions of Jacobi-Davidson SVD (JDSVD) type methods for computing one or more interior singular triplets of a large matrix $A$. At each outer iteration of these methods, a correction equation, i.e., an inner linear system, is solved approximately by iterative methods, which leads to two inexact JDSVD type methods, as opposed to the exact methods where correction equations are solved exactly. The accuracy of the inner iterations critically affects the convergence and overall efficiency of the inexact JDSVD methods. A central problem is how accurately the correction equations should be solved so as to ensure that both of the inexact JDSVD methods mimic their exact counterparts well, that is, they use almost the same outer iterations to achieve the convergence. In this paper, similarly to the available results on JD type methods for large matrix eigenvalue problems, we prove that each inexact JDSVD method behaves like its exact counterpart if all the correction equations are solved with low or modest accuracy during the outer iterations. Based on the theory, we propose practical stopping criteria for the inner iterations. Numerical experiments confirm our theory and the effectiveness of the inexact algorithms. Comment: 30 pages, 3 figures
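
    The "low or modest accuracy suffices" message can be illustrated with a stripped-down inexact Jacobi-Davidson loop for a symmetric eigenproblem (a simplification of our own: the JDSVD methods in the paper work with the singular value problem and harmonic or refined extraction, and we target an extremal rather than interior eigenpair for robustness). The correction equation is solved by GMRES to relative accuracy $\xi$; the point is that $\xi\approx 10^{-2}$ costs few extra outer iterations compared with $\xi\approx 10^{-12}$. The rtol keyword assumes a recent SciPy (1.12 or later):

```python
import numpy as np
from scipy.sparse.linalg import gmres, LinearOperator

rng = np.random.default_rng(5)
n = 400
M = rng.standard_normal((n, n)); M = (M + M.T) / 2

def outer_its(xi, tol=1e-8, maxouter=60):
    V = rng.standard_normal((n, 1)); V /= np.linalg.norm(V)
    for it in range(maxouter):
        w, S = np.linalg.eigh(V.T @ (M @ V))  # Rayleigh-Ritz projection
        j = int(np.argmax(w))                 # extremal Ritz pair
        theta, u = w[j], V @ S[:, j]
        r = M @ u - theta * u
        if np.linalg.norm(r) < tol:
            return it
        # JD correction equation (I-uu^T)(M-theta*I)(I-uu^T)t = -r,
        # solved only to relative accuracy xi by GMRES.
        def op(t):
            t = t - u * (u @ t)
            t = M @ t - theta * t
            return t - u * (u @ t)
        J = LinearOperator((n, n), matvec=op, dtype=float)
        t, _ = gmres(J, -r, rtol=xi)
        t -= V @ (V.T @ t)                    # expand the search space
        V = np.hstack([V, (t / np.linalg.norm(t))[:, None]])
    return maxouter

for xi in (1e-12, 1e-2):
    print(xi, outer_its(xi))   # nearly the same outer iteration counts
```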

    A Residual Based Sparse Approximate Inverse Preconditioning Procedure for Large Sparse Linear Systems

    The SPAI algorithm, a sparse approximate inverse preconditioning technique for large sparse linear systems proposed by Grote and Huckle [SIAM J. Sci. Comput., 18 (1997), pp. 838-853], is based on F-norm minimization and computes a sparse approximate inverse $M$ of a large sparse matrix $A$ adaptively. However, SPAI may be costly in seeking the most profitable indices at each loop, and $M$ may be ineffective for preconditioning. In this paper, we propose a residual based sparse approximate inverse preconditioning procedure (RSAI), which, unlike SPAI, is based on only the dominant rather than all information on the current residual, and augments sparsity patterns adaptively during the loops. RSAI is less costly in seeking indices and more effective in capturing a good approximate sparsity pattern of $A^{-1}$ than SPAI. To control the sparsity of $M$ and reduce computational cost, we develop a practical RSAI(tol) algorithm that drops small nonzero entries adaptively during the process. Numerical experiments are reported to demonstrate that RSAI(tol) is at least competitive with SPAI and can be considerably more efficient and effective than SPAI. They also indicate that RSAI(tol) is comparable to the PSAI(tol) algorithm proposed by one of the authors in 2009. Comment: 18 pages, 1 figure
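
    For concreteness, here is a bare-bones static-pattern F-norm minimization sketch (our own simplification: the adaptive pattern-growing logic that distinguishes SPAI, PSAI(tol) and RSAI(tol) is omitted, and the pattern of $M$ is fixed to that of $A$, which is assumed to have a nonzero diagonal). Each column $m_j$ of $M$ solves a small least squares problem $\min\|Am_j-e_j\|_2$ restricted to its pattern:

```python
import numpy as np
from scipy.sparse import random as sprandom, identity, csc_matrix

def spai_static(A):
    """F-norm minimization over the fixed sparsity pattern of A."""
    n = A.shape[0]
    rows, cols, vals = [], [], []
    for j in range(n):
        J = A[:, j].nonzero()[0]             # allowed nonzeros of m_j
        I = np.unique(A[:, J].nonzero()[0])  # rows touched by columns J
        e = np.zeros(len(I)); e[np.searchsorted(I, j)] = 1.0
        # one small dense least squares problem per column
        mj, *_ = np.linalg.lstsq(A[np.ix_(I, J)].toarray(), e, rcond=None)
        rows.extend(J); cols.extend([j] * len(J)); vals.extend(mj)
    return csc_matrix((vals, (rows, cols)), shape=(n, n))

n = 200
A = (sprandom(n, n, density=0.02, random_state=6)
     + 5 * identity(n)).tocsc()              # diagonally dominant test matrix
M = spai_static(A)
print(np.linalg.norm((A @ M - identity(n)).toarray(), 'fro'))  # ||AM - I||_F
```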

    A Transformation Approach that Makes SPAI, PSAI and RSAI Procedures Efficient for Large Double Irregular Nonsymmetric Sparse Linear Systems

    A sparse matrix is called double irregular sparse if it has at least one relatively dense column and row, and double regular sparse if all of its columns and rows are sparse. The sparse approximate inverse preconditioning procedures SPAI, PSAI(tol) and RSAI(tol) are costly and even impractical for constructing preconditioners for a large sparse nonsymmetric linear system whose coefficient matrix is double irregular sparse, but they are efficient for double regular sparse problems. Double irregular sparse linear systems have a wide range of applications, and 4.4% of the nonsymmetric matrices in the University of Florida sparse matrix collection are double irregular sparse. For this class of problems, we propose a transformation approach, which consists of four steps: (i) transform a given double irregular sparse problem into a small number of double regular sparse ones with the same coefficient matrix $\hat{A}$, (ii) use SPAI, PSAI(tol) and RSAI(tol) to construct sparse approximate inverses $M$ of $\hat{A}$, (iii) solve the preconditioned double regular sparse linear systems by Krylov solvers, and (iv) recover an approximate solution of the original problem, with a prescribed accuracy, from those of the double regular sparse ones. A number of theoretical and practical issues involved in the transformation approach are considered. Numerical experiments on a number of real-world problems confirm the sharp superiority of the transformation approach to the standard approach that preconditions the original double irregular sparse problem by SPAI, PSAI(tol) or RSAI(tol) and solves the resulting preconditioned system by Krylov solvers. Comment: 20 pages, 4 figures

    On Regularizing Effects of MINRES and MR-II for Large-Scale Symmetric Discrete Ill-Posed Problems

    For large-scale symmetric discrete ill-posed problems, MINRES and MR-II are commonly used iterative regularization solvers. We call a regularized solution best possible if it is at least as accurate as the best regularized solution obtained by the truncated singular value decomposition (TSVD) method. In this paper, we analyze their regularizing effects and establish the following results: (i) filtered SVD expressions are derived for the regularized solutions by MINRES; (ii) a hybrid MINRES that uses explicit regularization within the projected problems is needed to compute a best possible regularized solution to a given ill-posed problem; (iii) the $k$th iterate by MINRES is more accurate than the $(k-1)$th iterate by MR-II until the semi-convergence of MINRES, but MR-II has globally better regularizing effects than MINRES; (iv) bounds are obtained for the 2-norm distance between an underlying $k$-dimensional Krylov subspace and the $k$-dimensional dominant eigenspace. They show that MR-II has better regularizing effects for severely and moderately ill-posed problems than for mildly ill-posed problems, and that a hybrid MR-II is needed to get a best possible regularized solution for mildly ill-posed problems; (v) bounds are derived for the entries generated by the symmetric Lanczos process that MR-II is based on, showing how fast they decay. Numerical experiments confirm our assertions. Stronger than our theory predicts, the regularizing effects of MR-II are experimentally shown to be good enough to obtain best possible regularized solutions for severely and moderately ill-posed problems. Comment: 25 pages, 11 figures. arXiv admin note: text overlap with arXiv:1503.0186
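
    The semi-convergence that drives this analysis is easy to exhibit for MINRES with SciPy (MR-II has no SciPy counterpart, so this sketch, built on a toy symmetric problem of our own, covers the MINRES half only). The callback records the error history; its minimizer is the semi-convergence point where the iteration should be stopped:

```python
import numpy as np
from scipy.sparse.linalg import minres

rng = np.random.default_rng(7)
n = 200
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
lam = 0.85 ** np.arange(n)                # symmetric, severely ill-conditioned
A = Q @ np.diag(lam) @ Q.T
x_true = Q @ (1.0 / (1.0 + np.arange(n)))
b = A @ x_true + 1e-6 * rng.standard_normal(n)

errors = []                               # error history over the iterations
minres(A, b, maxiter=40,
       callback=lambda xk: errors.append(np.linalg.norm(xk - x_true)))
k_best = int(np.argmin(errors)) + 1
print(k_best, errors[k_best - 1])         # best iterate before noise takes over
```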

    An Approach to Making SPAI and PSAI Preconditioning Effective for Large Irregular Sparse Linear Systems

    We investigate the SPAI and PSAI preconditioning procedures and shed light on two important features of them: (i) for the large linear system $Ax=b$ with $A$ irregular sparse, i.e., with $A$ having $s$ relatively dense columns, SPAI may be very costly to implement and the resulting sparse approximate inverses may be ineffective for preconditioning, while PSAI can be effective for preconditioning but may require excessive storage and be unacceptably time consuming; (ii) the situation improves drastically when $A$ is regular sparse, that is, all of its columns are sparse. In this case, both SPAI and PSAI are efficient, and SPAI and, especially, PSAI are more likely to construct effective preconditioners. Motivated by these features, we propose an approach to making SPAI and PSAI more practical for $Ax=b$ with $A$ irregular sparse. We first split $A$ into a regular sparse $\tilde{A}$ and a matrix of low rank $s$. Then, exploiting the Sherman-Morrison-Woodbury formula, we transform $Ax=b$ into $s+1$ new linear systems with the same coefficient matrix $\tilde{A}$, use SPAI and PSAI to compute sparse approximate inverses of $\tilde{A}$ efficiently, and apply Krylov iterative methods to solve the preconditioned linear systems. Theoretically, we consider the non-singularity and conditioning of $\tilde{A}$ obtained from some important classes of matrices. We show how to recover an approximate solution of $Ax=b$ from those of the $s+1$ new systems, and how to design reliable stopping criteria for the $s+1$ systems to guarantee that the approximate solution of $Ax=b$ has a desired accuracy. Given that irregular sparse linear systems are common in applications, this approach widely extends the practicability of SPAI and PSAI. Numerical results demonstrate the considerable superiority of our approach to the direct application of SPAI and PSAI to $Ax=b$. Comment: 25 pages, 2 figures
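
    The splitting and recovery steps can be demonstrated end to end on a toy problem. In the sketch below (our own construction; a direct sparse solve stands in for the SPAI/PSAI-preconditioned Krylov solves of the paper), the $s$ relatively dense columns are stripped out so that $A=\tilde{A}+UV^T$ with $V$ a column selector, and the Sherman-Morrison-Woodbury formula recovers $x$ from $s+1$ solves with $\tilde{A}$:

```python
import numpy as np
from scipy.sparse import random as sprandom, identity, csc_matrix
from scipy.sparse.linalg import spsolve

rng = np.random.default_rng(8)
n, dense_cols = 300, [5, 17]                  # s = 2 relatively dense columns
A = (sprandom(n, n, density=0.01, random_state=8) + 10 * identity(n)).tolil()
for j in dense_cols:
    A[:, j] = rng.standard_normal((n, 1))     # make column j dense
A = A.tocsc()

At = A.copy().tolil()                         # regular sparse A~:
for j in dense_cols:                          # dense columns replaced by e_j
    col = np.zeros((n, 1)); col[j, 0] = 1.0
    At[:, j] = col
At = At.tocsc()
U = (A - At)[:, dense_cols].toarray()         # so A = A~ + U * I[:,S]^T exactly

b = rng.standard_normal(n)
z = spsolve(At, b)                            # 1 solve with A~
W = spsolve(At, csc_matrix(U)).toarray()      # s more solves with A~
C = np.eye(len(dense_cols)) + W[dense_cols, :]   # capacitance matrix I + V^T W
x = z - W @ np.linalg.solve(C, z[dense_cols])    # Sherman-Morrison-Woodbury
print(np.linalg.norm(A @ x - b))              # ~ machine precision
```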